Developing Real-Time Emergency Management Applications: Methodology for a Novel Programming Model Approach
Recent years have been characterized by the rise of highly distributed computing
platforms composed of a heterogeneous mix of computing and communication resources,
including centralized high-performance computing architectures (e.g., clusters or large
shared-memory machines) as well as multi-/many-core components integrated into mobile
nodes and network facilities. The emergence of computational paradigms such as Grid and
Cloud Computing provides potential solutions for integrating such platforms with data
systems, natural-phenomena simulations, knowledge discovery and decision support systems,
responding to a dynamic demand for remote computing and communication resources and services.
In this context, time-critical applications, notably emergency management systems, are
composed of complex sets of application components, each specialized for a specific
computation, which cooperate to achieve a global goal in a distributed manner. In recent
years the scientific community has been tackling the programming issues of distributed
systems, aiming at applications of increasing complexity in the number of distributed
components, in the spatial distribution of and cooperation between interested parties,
and in their degree of heterogeneity.
Over the last decade, research in distributed computing has focused on one crucial
objective. The wide-ranging composition of distributed platforms in terms of different
classes of computing nodes and network technologies, together with the strong diffusion
of applications that require real-time elaboration and online compute-intensive processing,
as in the case of emergency management systems, leads to a pronounced tendency of systems
towards properties like self-management, self-organization, self-control and, in short,
adaptivity.
Adaptivity implies the development, deployment, execution and management of applications
that are, in general, dynamic in nature. Dynamicity concerns the number and the specific
identity of the cooperating components, as well as the deployment and composition of the
most suitable versions of software components on processing and networking resources and
services, i.e., both the quantity and the quality of the application components needed to
achieve the required Quality of Service (QoS). In time-critical applications the QoS specification
can dynamically vary during the execution, according to the user intentions and the
information produced by sensors and services, as well as according to the monitored state
and performance of networks and nodes.
(Chapter authors: Gabriele Mencagli and Marco Vanneschi, Department of Computer Science, University of Pisa, Italy.)
The general reference point for this kind of system is the Grid paradigm which, by
definition, aims to enable the access, selection and aggregation of a variety of distributed
and heterogeneous resources and services. However, although notable advances have been
achieved in recent years, current Grid technology is not yet able to supply software tools
featuring the high adaptivity, ubiquity, proactivity, self-organization, scalability and
performance, interoperability, fault tolerance and security demanded by emerging
applications.
For this reason, in this chapter we study a methodology for designing high-performance
computations able to exploit the heterogeneity and dynamicity of distributed environments
by expressing adaptivity and QoS awareness directly at the application level. An effective
approach needs to address issues like the QoS predictability of different application
configurations as well as the predictability of reconfiguration costs. Moreover, adaptation
strategies need to be developed that assure properties like the stability of a
reconfiguration decision and execution optimality (i.e., selecting reconfigurations that
strike proper trade-offs among different QoS objectives). In this chapter we present the
basic points of a novel approach that lays the foundations for future programming model
environments for time-critical applications such as emergency management systems.
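The interplay between predicted QoS, reconfiguration cost and decision stability described above can be sketched as a toy decision rule. Everything below (function names, the ideal-speedup cost model, the numeric penalties) is invented for illustration and is not the chapter's actual adaptation strategy:

```python
def predicted_service_time(base_time, degree):
    """Toy analytical model: ideal speedup with the parallelism degree."""
    return base_time / degree

def choose_degree(base_time, degrees, current, reconf_cost, horizon):
    """Pick the degree minimizing predicted completion time over a horizon of
    tasks, charging a one-off cost whenever the configuration changes."""
    best, best_cost = current, float("inf")
    for d in degrees:
        cost = horizon * predicted_service_time(base_time, d)
        if d != current:
            cost += reconf_cost   # switching penalty: favors stable decisions
        if cost < best_cost:
            best, best_cost = d, cost
    return best

# Short horizon: the switching penalty dominates, so the module keeps its
# current configuration; long horizon: reconfiguring pays off.
print(choose_degree(1.0, [1, 2, 4, 8], current=4, reconf_cost=10.0, horizon=5))    # → 4
print(choose_degree(1.0, [1, 2, 4, 8], current=4, reconf_cost=10.0, horizon=200))  # → 8
```

Charging an explicit cost to every configuration change is one simple way to obtain the stability property discussed above: small or short-lived QoS gains never trigger a reconfiguration.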
The organization of this chapter is the following. In Section 2 we compare existing
research on developing adaptive systems in critical environments, highlighting their
drawbacks and inefficiencies. In Section 3, in order to clarify the application scenarios
that we are considering, we present an emergency management system in which the run-time
selection of proper application configuration parameters is of great importance for meeting
the desired QoS constraints. In Section 4 we describe the basic points of our approach in
terms of how compute-intensive operations can be programmed, how they can be dynamically
modified and how adaptation strategies can be expressed. In Section 5 our approach is
contextualized in the definition of an adaptive parallel module, which is a building block
for composing complex and distributed adaptive computations. Finally, in Section 6 we
describe a set of experimental results that show the viability of our approach, and in
Section 7 we give the concluding remarks of this chapter.
Few-Shot Learning for Post-Earthquake Urban Damage Detection
Koukouraki, E., Vanneschi, L., & Painho, M. (2022). Few-Shot Learning for Post-Earthquake Urban Damage Detection. Remote Sensing, 14(1), 1-20. [40]. https://doi.org/10.3390/rs14010040
Funding: This study was partially supported by FCT, Portugal, through funding of the projects BINDER (PTDC/CCI-INF/29168/2017) and AICE (DSAIPA/DS/0113/2019). E.K. would like to acknowledge the Erasmus Mundus scholarship program for providing the context and financial support to carry out this study, through admission to the Master of Science in Geospatial Technologies.
Among natural disasters, earthquakes are recorded to have the highest rates of human loss in the past 20 years. Their unexpected nature has severe consequences for both human lives and material infrastructure, demanding that urgent action be taken. For effective emergency relief, it is necessary to gain awareness of the level of damage in the affected areas. The use of remotely sensed imagery is popular in damage assessment applications; however, it requires a considerable amount of labeled data, which are not always easy to obtain. Taking into consideration the recent developments in the fields of Machine Learning and Computer Vision, this study investigates and employs several Few-Shot Learning (FSL) strategies in order to address data insufficiency and imbalance in post-earthquake urban damage classification. While small datasets have been tested against binary classification problems, which usually divide the urban structures into collapsed and non-collapsed, the potential of limited training data in multi-class classification has not been fully explored. To tackle this gap, four models were created, following different data balancing methods, namely cost-sensitive learning, oversampling, undersampling and Prototypical Networks. After a quantitative comparison among them, the best performing model was found to be the one based on Prototypical Networks, and it was used for the creation of damage assessment maps. The contribution of this work is twofold: we show that oversampling is the most suitable data balancing method for training Deep Convolutional Neural Networks (CNNs) when compared to cost-sensitive learning and undersampling, and we demonstrate the appropriateness of Prototypical Networks in the damage classification context.
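The Prototypical Networks classification rule mentioned in the abstract reduces to nearest-prototype assignment in an embedding space. A minimal NumPy sketch, with random vectors standing in for the CNN embeddings of building patches (all names, shapes and class counts are illustrative, not taken from the paper):

```python
import numpy as np

def prototypes(support, labels):
    """Class prototype = mean embedding of that class's support examples."""
    return {int(c): support[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(query, protos):
    """Assign each query embedding to the nearest prototype (squared Euclidean)."""
    classes = sorted(protos)
    dists = np.stack([((query - protos[c]) ** 2).sum(axis=1) for c in classes])
    return [classes[i] for i in dists.argmin(axis=0)]

rng = np.random.default_rng(0)
# 3 hypothetical damage classes, 5 support embeddings each, 8-dim features
support = np.concatenate([rng.normal(c, 0.1, (5, 8)) for c in range(3)])
labels = np.repeat(np.arange(3), 5)
protos = prototypes(support, labels)
query = rng.normal(1, 0.1, (4, 8))   # four queries drawn near class 1
print(classify(query, protos))       # → [1, 1, 1, 1] with this seed
```

Because classification only needs one prototype per class, a handful of labeled examples per damage level suffices, which is what makes the approach attractive in the few-shot setting the paper studies.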
The home-forwarding mechanism to reduce the cache coherence overhead in next-generation CMPs
On the road to computer systems able to support the requirements of exascale applications, Chip Multi-Processors (CMPs) are equipped with an ever-increasing number of cores interconnected through fast on-chip networks. To exploit such new architectures, parallel software must be able to scale almost linearly with the number of available cores. To this end, the overhead introduced by the run-time system of parallel programming frameworks and by the architecture itself must be small enough to enable high scalability even for very fine-grained parallel programs. One approach to reducing this overhead is to use non-conventional architectural mechanisms that prove useful when certain concurrency patterns in the running application are statically or dynamically recognized. Following this idea, this paper proposes a run-time support able to reduce the effective latency of inter-thread cooperation primitives by lowering the contention on individual caches. To achieve this goal, the new home-forwarding hardware mechanism is proposed and used by our runtime in order to reduce the amount of cache-to-cache interactions generated by the cache coherence protocol. Our ideas have been emulated on the Tilera TILEPro64 CMP, showing a significant speedup improvement in a first set of benchmarks.
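The home-forwarding mechanism itself is hardware/runtime support and cannot be demonstrated in portable code, but the delegation idea it exploits, routing all updates of a datum to a single home so that other caches never write it, has a simple software analogue. The toy Python sketch below is invented here for illustration and is not the paper's runtime: producer threads forward increments to the one "home" thread that owns the shared state, instead of contending on it directly.

```python
import threading, queue

def home_thread(inbox, state):
    """The home owns the state; it applies every forwarded update in order."""
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel
            break
        state["counter"] += msg  # only the home thread ever writes the state

inbox, state = queue.Queue(), {"counter": 0}
home = threading.Thread(target=home_thread, args=(inbox, state))
home.start()

# Four producer threads forward their updates instead of writing directly.
workers = [threading.Thread(target=lambda: [inbox.put(1) for _ in range(1000)])
           for _ in range(4)]
for w in workers: w.start()
for w in workers: w.join()
inbox.put(None)                  # all 4000 updates are queued before the sentinel
home.join()
print(state["counter"])          # → 4000
```

In the paper's setting the same single-writer discipline is enforced at the cache level, so the coherence protocol no longer needs to bounce the modified lines between cores.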
A GP approach for precision farming
Abbona, F., Vanneschi, L., Bona, M., & Giacobini, M. (2020). A GP approach for precision farming. In 2020 IEEE Congress on Evolutionary Computation, CEC: 2020 Conference Proceedings (pp. 1-8). [9185637]. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CEC48606.2020.9185637
Livestock is increasingly treated not just as food containers, but as animals that can be susceptible to stress and diseases, which therefore affect the production of offspring and the performance of the farm. The breeder needs a simple and useful tool to make the best decisions for his farm, as well as to objectively check whether the choices and investments made have improved or worsened its performance. The amount of data is huge but often dispersive: it is therefore essential to provide the farmer with a clear and comprehensible solution, which represents an additional investment. This research proposes a genetic programming approach to predict the yearly number of weaned calves per cow of a farm, namely the measure of its performance. To investigate the efficiency of genetic programming on such a problem, a dataset composed of observations on representative Piedmontese breedings was used. The results show that the algorithm is appropriate and can perform an implicit feature selection, highlighting important variables and leading to simple and interpretable models.
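A genetic-programming regressor of the kind the paper applies can be sketched in a few lines. The toy target, operator set and parameters below are invented for illustration (the study uses a full GP system on real farm observations); note how the evolved trees may ignore the irrelevant feature x1, the "implicit feature selection" mentioned above:

```python
import random, operator

OPS = [operator.add, operator.sub, operator.mul]
TERMS = ["x0", "x1", 1.0]

def rand_tree(depth):
    """Random expression tree: an operator node or a terminal."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(OPS), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == "x0": return x[0]
    if t == "x1": return x[1]
    if isinstance(t, float): return t
    f, a, b = t
    return f(evaluate(a, x), evaluate(b, x))

def fitness(t, data):
    """Sum of squared errors; lower is better."""
    return sum((evaluate(t, x) - y) ** 2 for x, y in data)

def mutate(t):
    if random.random() < 0.1:
        return rand_tree(2)          # replace this subtree with a fresh one
    if not isinstance(t, tuple):
        return t                     # keep terminals as they are
    return (t[0], mutate(t[1]), mutate(t[2]))

random.seed(1)
# toy data: the target 2*x0 + 1 depends on x0 only, so x1 must be ignored
data = [((x0, x1), 2 * x0 + 1) for x0 in range(5) for x1 in range(5)]
pop = [rand_tree(3) for _ in range(50)]
initial_best = min(fitness(t, data) for t in pop)
for _ in range(30):                  # elitism + mutation, no crossover
    pop.sort(key=lambda t: fitness(t, data))
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]
best = min(pop, key=lambda t: fitness(t, data))
print(fitness(best, data) <= initial_best)   # → True: elites never regress
```

The interpretability the abstract highlights comes from the fact that the result is a symbolic expression tree, which the breeder can read directly, rather than an opaque model.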
- …